Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper we show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results on the implicit bias of training neural networks with gradient-based methods. To the best of our knowledge, our results are the first to show that reconstructing a large portion of the actual training samples from a trained neural network classifier is generally possible. This has negative implications for privacy, as it can be used as an attack for revealing sensitive training data. We demonstrate our method for binary MLP classifiers on a few standard computer vision datasets.
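The abstract does not spell out the reconstruction scheme, so the following is only a minimal sketch of one way such a scheme could look, assuming it exploits the stationarity (KKT) conditions associated with the implicit max-margin bias of gradient training: the trained parameters should approximately equal a non-negative combination of per-sample output gradients. The model, the number of candidates, and the label assignment below are hypothetical placeholders.

```python
# Minimal sketch, ASSUMING the reconstruction exploits the stationarity (KKT)
# conditions of the implicit max-margin bias:
#   theta ~= sum_i lambda_i * y_i * grad_theta f(x_i; theta), lambda_i >= 0.
# The model, candidate count, and labels are placeholders, not the paper's setup.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 1))  # "trained" binary MLP
theta = torch.cat([p.detach().flatten() for p in model.parameters()])

m = 20                                         # number of candidate samples to reconstruct
x = torch.randn(m, 784, requires_grad=True)    # candidate inputs, initialized randomly
y = torch.tensor([1.0] * 10 + [-1.0] * 10)     # assumed balanced labels
lam = torch.zeros(m, requires_grad=True)       # dual coefficients (softplus keeps them >= 0)
opt = torch.optim.Adam([x, lam], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    # Accumulate sum_i lambda_i * y_i * grad_theta f(x_i; theta)
    acc = torch.zeros_like(theta)
    for i in range(m):
        out = model(x[i:i + 1]).squeeze()
        g = torch.autograd.grad(out, model.parameters(), create_graph=True)
        acc = acc + torch.nn.functional.softplus(lam[i]) * y[i] * torch.cat([gi.flatten() for gi in g])
    loss = (theta - acc).pow(2).sum()          # stationarity residual, minimized over x and lam
    loss.backward()
    opt.step()
```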
GANs are able to perform generation and manipulation tasks, trained on a single video. However, these single-video GANs require unreasonable amounts of time to train on a single video, rendering them almost impractical. In this paper we question the necessity of a GAN for generation from a single video, and introduce a non-parametric baseline for a variety of generation and manipulation tasks. We revive classical space-time patch nearest-neighbor approaches and adapt them to a scalable unconditional generative model, without any learning. This simple baseline surprisingly outperforms single-video GANs in visual quality and realism (confirmed by quantitative and qualitative evaluations), and is disproportionately faster (runtime reduced from several days to seconds). Beyond diverse video generation, we demonstrate other applications using the same framework, including video analogies and spatio-temporal retargeting. Our proposed approach is easily scaled to Full-HD videos. These observations show that classical approaches, if adapted correctly, significantly outperform heavy deep learning machinery for these tasks. This sets a new baseline for single-video generation and manipulation tasks and, no less important, makes diverse generation from a single video possible for the first time.
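As a rough illustration of the classical patch nearest-neighbor idea the baseline revives, here is a minimal single-scale sketch in NumPy: every space-time patch of a working clip is replaced by its nearest patch from the reference video, and overlapping patches are blended by averaging. The real method presumably uses a coarse-to-fine pyramid and fast approximate nearest-neighbor search; those details, and all sizes below, are assumptions.

```python
# Minimal single-scale sketch of space-time patch nearest-neighbor generation.
import numpy as np

def extract_patches(video, p=3):
    """Return all overlapping p x p x p space-time patches (flattened) and their coordinates."""
    T, H, W, C = video.shape
    patches, coords = [], []
    for t in range(T - p + 1):
        for y in range(H - p + 1):
            for x in range(W - p + 1):
                patches.append(video[t:t + p, y:y + p, x:x + p].ravel())
                coords.append((t, y, x))
    return np.stack(patches), coords

def patch_nn_generate(reference, init, p=3, iters=5):
    """Iteratively replace each patch of `init` with its nearest reference patch,
    then blend overlapping patches by averaging (simple 'patch voting')."""
    ref_patches, _ = extract_patches(reference, p)
    out = init.copy()
    for _ in range(iters):
        out_patches, coords = extract_patches(out, p)
        # nearest-neighbor search (squared L2) via the expansion ||a-b||^2 = ||a||^2 - 2ab + ||b||^2
        d2 = ((out_patches ** 2).sum(1)[:, None]
              - 2.0 * out_patches @ ref_patches.T
              + (ref_patches ** 2).sum(1)[None, :])
        nn_idx = d2.argmin(axis=1)
        acc = np.zeros_like(out, dtype=np.float64)
        cnt = np.zeros(out.shape[:3] + (1,))
        for k, (t, y, x) in enumerate(coords):
            acc[t:t + p, y:y + p, x:x + p] += ref_patches[nn_idx[k]].reshape(p, p, p, -1)
            cnt[t:t + p, y:y + p, x:x + p] += 1
        out = acc / cnt
    return out

# Toy usage: regenerate a tiny clip from a perturbed starting point.
ref = np.random.rand(6, 16, 16, 3)
init = ref + 0.3 * np.random.randn(*ref.shape)
sample = patch_nn_generate(ref, init)
```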
Representing shapes as level sets of neural networks has been recently proved to be useful for different shape analysis and reconstruction tasks. So far, such representations were computed using either: (i) pre-computed implicit shape representations; or (ii) loss functions explicitly defined over the neural level sets. In this paper we offer a new paradigm for computing high fidelity implicit neural representations directly from raw data (i.e., point clouds, with or without normal information). We observe that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions. We provide a theoretical analysis of this property for the linear case, and show that, in practice, our method leads to state of the art implicit neural representations with higher level-of-details and fidelity compared to previous methods.
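The loss described above can be written down almost verbatim: one term drives the network to zero on the input points, and an eikonal term drives its gradient to unit norm. The following PyTorch sketch assumes a particular architecture, off-surface sampling scheme, and weighting, which are illustrative choices rather than the paper's exact setup.

```python
# Minimal sketch of the described loss: f should vanish on the point cloud and
# have unit-norm gradient (eikonal term). Architecture, off-surface sampling,
# and the weight `lam` are assumptions for illustration.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(3, 256), nn.Softplus(beta=100),
                  nn.Linear(256, 256), nn.Softplus(beta=100),
                  nn.Linear(256, 1))

def igr_loss(points, lam=0.1):
    # Term 1: the implicit function should be zero on the input point cloud.
    on_surface = f(points).abs().mean()

    # Term 2 (eikonal): ||grad_x f(x)|| should be 1, enforced on random points
    # sampled around the shape (the exact sampling scheme is assumed).
    x = (points + 0.5 * torch.randn_like(points)).requires_grad_(True)
    grad = torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
    eikonal = ((grad.norm(dim=-1) - 1.0) ** 2).mean()

    return on_surface + lam * eikonal

pts = torch.randn(1024, 3)                      # placeholder point cloud
opt = torch.optim.Adam(f.parameters(), lr=1e-4)
for _ in range(100):
    opt.zero_grad()
    igr_loss(pts).backward()
    opt.step()
```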
Directed information (DI) is a fundamental measure for the study and analysis of sequential stochastic models. In particular, when optimized over input distributions it characterizes the capacity of general communication channels. However, analytic computation of DI is typically intractable and existing optimization techniques over discrete input alphabets require knowledge of the channel model, which renders them inapplicable when only samples are available. To overcome these limitations, we propose a novel estimation-optimization framework for DI over discrete input spaces. We formulate DI optimization as a Markov decision process and leverage reinforcement learning techniques to optimize a deep generative model of the input process probability mass function (PMF). Combining this optimizer with the recently developed DI neural estimator, we obtain an end-to-end estimation-optimization algorithm which is applied to estimating the (feedforward and feedback) capacity of various discrete channels with memory. Furthermore, we demonstrate how to use the optimized PMF model to (i) obtain theoretical bounds on the feedback capacity of unifilar finite-state channels; and (ii) perform probabilistic shaping of constellations in the peak power-constrained additive white Gaussian noise channel.
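As a very rough sketch of the estimation-optimization idea, the snippet below parameterizes the input process with an autoregressive PMF model and updates it with a score-function (REINFORCE-style) gradient, treating a directed-information estimate as a black-box reward. The `channel` and `di_estimate` functions are toy stand-ins (a binary symmetric channel and an agreement-rate proxy), not the paper's channel models or its DI neural estimator.

```python
# Hedged sketch: optimizing a sequential input PMF with a score-function gradient
# against a black-box reward standing in for a directed-information estimate.
import torch
import torch.nn as nn

ALPHABET, HIDDEN, T = 2, 32, 64                 # binary input alphabet, sequence length T

class PMFModel(nn.Module):
    """Autoregressive model of the input process: p(x_t | x_{<t})."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(ALPHABET, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, ALPHABET)

    def sample(self, batch):
        x, h, logps, symbols = torch.zeros(batch, 1, ALPHABET), None, [], []
        for _ in range(T):
            out, h = self.rnn(x, h)
            dist = torch.distributions.Categorical(logits=self.head(out[:, -1]))
            s = dist.sample()
            logps.append(dist.log_prob(s))
            symbols.append(s)
            x = nn.functional.one_hot(s, ALPHABET).float().unsqueeze(1)
        return torch.stack(symbols, 1), torch.stack(logps, 1).sum(1)

def channel(x):
    """Toy stand-in: binary symmetric channel with crossover probability 0.1."""
    flip = (torch.rand_like(x.float()) < 0.1).long()
    return (x + flip) % 2

def di_estimate(x, y):
    """Toy stand-in reward; the real method uses a trained DI neural estimator."""
    return (x == y).float().mean(dim=1)

pmf, baseline = PMFModel(), 0.0
opt = torch.optim.Adam(pmf.parameters(), lr=1e-3)
for step in range(1000):
    x, logp = pmf.sample(batch=128)
    reward = di_estimate(x, channel(x))
    baseline = 0.9 * baseline + 0.1 * reward.mean().item()   # moving-average baseline
    loss = -((reward - baseline).detach() * logp).mean()     # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()
```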
We construct a universally Bayes consistent learning rule that satisfies differential privacy (DP). We first handle the setting of binary classification and then extend our rule to the more general setting of density estimation (with respect to the total variation metric). The existence of a universally consistent DP learner reveals a stark difference with the distribution-free PAC model. Indeed, in the latter DP learning is extremely limited: even one-dimensional linear classifiers are not privately learnable in this stringent model. Our result thus demonstrates that by allowing the learning rate to depend on the target distribution, one can circumvent the above-mentioned impossibility result and in fact, learn \emph{arbitrary} distributions by a single DP algorithm. As an application, we prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal \emph{labeled} sample complexity of $\tilde{O}(d/\varepsilon)$ labeled examples (and with an unlabeled sample complexity that can depend on the target distribution).
The post-training quantization (PTQ) challenge of bringing quantized neural net accuracy close to that of the original model has drawn much attention, driven by industry demand. Many of the methods emphasize optimization of a specific degree-of-freedom (DoF), such as the quantization step size, preconditioning factors, or bias fixing, often chained to others in multi-step solutions. Here we rethink quantized network parameterization in a HW-aware fashion, towards a unified analysis of all quantization DoF, permitting for the first time their joint end-to-end finetuning. Our single-step, simple and extendable method, dubbed quantization-aware finetuning (QFT), achieves 4-bit weight quantization results on par with SoTA within PTQ constraints of speed and resource.
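To illustrate what joint finetuning of quantization degrees of freedom can look like, here is a hedged sketch of a linear layer whose weights are fake-quantized with a learnable step size and trained end-to-end through a straight-through estimator on a small calibration set. The uniform symmetric 4-bit, LSQ-style parameterization is an assumption for illustration, not necessarily the QFT parameterization.

```python
# Hedged sketch: weights and a learnable quantization step size finetuned jointly,
# with a straight-through estimator for the rounding operation.
import torch
import torch.nn as nn

class QuantLinear(nn.Module):
    def __init__(self, in_f, out_f, bits=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_f))
        self.qmax = 2 ** (bits - 1) - 1
        # learnable quantization step size, initialized from the weight range
        self.step = nn.Parameter(self.weight.detach().abs().max() / self.qmax)

    def forward(self, x):
        w = self.weight / self.step
        w_q = torch.clamp(torch.round(w), -self.qmax - 1, self.qmax)
        w_q = w + (w_q - w).detach()            # straight-through estimator
        return nn.functional.linear(x, w_q * self.step, self.bias)

# Joint end-to-end finetuning of weights AND step size on a tiny calibration set.
layer = QuantLinear(128, 10)
opt = torch.optim.Adam(layer.parameters(), lr=1e-4)
calib_x, calib_y = torch.randn(256, 128), torch.randint(0, 10, (256,))
for _ in range(100):
    opt.zero_grad()
    nn.functional.cross_entropy(layer(calib_x), calib_y).backward()
    opt.step()
```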
Labeling large image datasets with attributes such as facial age or object type is tedious and sometimes infeasible. Supervised machine learning methods provide a highly accurate solution, but require manual labels which are often unavailable. Zero-shot models (e.g., CLIP) do not require manual labels but are not as accurate as supervised ones, particularly when the attribute is numeric. We propose a new approach, CLIPPR (CLIP with Priors), which adapts zero-shot models for regression and classification on unlabelled datasets. Our method does not use any annotated images. Instead, we assume a prior over the label distribution in the dataset. We then train an adapter network on top of CLIP under two competing objectives: (i) minimal change of predictions from the original CLIP model; and (ii) minimal distance between the predicted and prior distributions of labels. Additionally, we present a novel approach for selecting prompts for Vision & Language models using a distributional prior. Our method is effective and presents a significant improvement over the original model. We demonstrate an improvement of 28% in mean absolute error on the UTK age regression task. We also present promising results for classification benchmarks, improving the classification accuracy on the ImageNet dataset by 2.83%, without using any labels.
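The two competing objectives translate naturally into two loss terms; the sketch below shows one plausible instantiation for the classification case: a KL term keeping the adapter close to the frozen zero-shot CLIP predictions, and a KL term pulling the batch-averaged predicted label distribution toward the assumed prior. The adapter architecture, divergences, and weights are illustrative assumptions.

```python
# Hedged sketch of the two competing objectives for the classification case.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, K = 512, 10                                   # CLIP embedding dim, number of classes
adapter = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, K))
opt = torch.optim.Adam(adapter.parameters(), lr=1e-4)
prior = torch.full((K,), 1.0 / K)                # assumed prior over labels (uniform here)

def train_step(image_emb, clip_logits, alpha=1.0, beta=1.0):
    """image_emb: frozen CLIP image embeddings; clip_logits: zero-shot
    image-text similarity logits from the original CLIP model."""
    logits = adapter(image_emb)
    # (i) minimal change from the original zero-shot predictions
    keep = F.kl_div(F.log_softmax(logits, -1), F.softmax(clip_logits, -1),
                    reduction="batchmean")
    # (ii) match the batch-averaged predicted distribution to the prior
    avg_pred = F.softmax(logits, -1).mean(0)
    match = F.kl_div(avg_pred.log(), prior, reduction="sum")
    loss = alpha * keep + beta * match
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random stand-ins for CLIP outputs (no labels are used).
emb, zlogits = torch.randn(64, D), torch.randn(64, K)
for _ in range(10):
    train_step(emb, zlogits)
```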
Recently proposed Gated Linear Networks present a tractable nonlinear network architecture, and exhibit interesting capabilities such as learning with local error signals and reduced forgetting in sequential learning. In this work, we introduce a novel gating architecture, named Globally Gated Deep Linear Networks (GGDLNs) where gating units are shared among all processing units in each layer, thereby decoupling the architectures of the nonlinear but unlearned gatings and the learned linear processing motifs. We derive exact equations for the generalization properties in these networks in the finite-width thermodynamic limit, defined by $P,N\rightarrow\infty, P/N\sim O(1)$, where P and N are the training sample size and the network width respectively. We find that the statistics of the network predictor can be expressed in terms of kernels that undergo shape renormalization through a data-dependent matrix compared to the GP kernels. Our theory accurately captures the behavior of finite width GGDLNs trained with gradient descent dynamics. We show that kernel shape renormalization gives rise to rich generalization properties w.r.t. network width, depth and L2 regularization amplitude. Interestingly, networks with sufficient gating units behave similarly to standard ReLU networks. Although gatings in the model do not participate in supervised learning, we show the utility of unsupervised learning of the gating parameters. Additionally, our theory allows the evaluation of the network's ability for learning multiple tasks by incorporating task-relevant information into the gating units. In summary, our work is the first exact theoretical solution of learning in a family of nonlinear networks with finite width. The rich and diverse behavior of the GGDLNs suggests that they are helpful analytically tractable models of learning single and multiple tasks, in finite-width nonlinear deep networks.
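Under one plausible reading of the architecture described above, each layer applies a gate-weighted sum of learned linear maps, where the gates are fixed (unlearned) nonlinear functions of the network input and are shared by every processing unit in the layer. The sketch below encodes that reading; the gating nonlinearity, the number of gates, and all sizes are assumptions.

```python
# Hedged sketch of a globally gated deep linear layer under one plausible reading:
# per-sample effective weights W(x) = sum_m g_m(x) * W^(m), with unlearned gates g_m
# shared across all units of the layer.
import torch
import torch.nn as nn

class GloballyGatedLinearLayer(nn.Module):
    def __init__(self, d_in, d_out, d_gate, n_gates=4):
        super().__init__()
        # learned linear processing motifs: one linear map per gating unit
        self.maps = nn.Parameter(0.1 * torch.randn(n_gates, d_out, d_in))
        # fixed random projection defining the nonlinear but unlearned gates
        self.register_buffer("gate_proj", torch.randn(n_gates, d_gate))

    def forward(self, h, x0):
        g = torch.sigmoid(x0 @ self.gate_proj.t())              # gates, shared by all units
        # per-sample effective linear map applied to the layer input h
        return torch.einsum("bm,moi,bi->bo", g, self.maps, h)

# Toy two-layer GGDLN-style predictor; gates everywhere depend on the raw input x0.
d, width = 20, 64
net = nn.ModuleList([GloballyGatedLinearLayer(d, width, d),
                     GloballyGatedLinearLayer(width, 1, d)])
x0 = torch.randn(8, d)
h = x0
for layer in net:
    h = layer(h, x0)                                            # final h: (8, 1)
```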
Imaging of facial affects can be used to measure psychophysiological attributes in children through adulthood, in particular for monitoring lifelong conditions such as autism spectrum disorder. Deep convolutional neural networks have shown promising results in classifying facial expressions of adults. However, classifier models trained on adult benchmark data are unsuitable for learning child expressions due to differences in psychophysical development. Similarly, models trained on child data perform poorly on adult expression classification. We propose domain adaptation to simultaneously align the distributions of adult and child expressions in a shared latent space, ensuring robust classification of either domain. Moreover, age variations in facial images have been studied in adult-child expression classification, yet remain insufficiently captured. We draw inspiration from multiple fields and propose deep adaptive facial expressions fused with Betamix-selected landmark features (FACE-BE-SELF) for adult-child facial expression classification. For the first time in the literature, mixtures of beta distributions are used to decompose and select facial features based on their correlations with expression, domain, and identity factors. We evaluate FACE-BE-SELF on two pairs of adult-child datasets. Our proposed FACE-BE-SELF approach outperforms adult-child transfer learning and other baseline adaptation methods in aligning the latent representations of adult and child expressions.
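The core idea of aligning adult and child expression features in a shared latent space can be sketched with a generic distribution-alignment penalty. The snippet below uses an RBF-kernel MMD term as a stand-in for the paper's alignment criterion; the landmark-based encoder, classifier, and weights are illustrative placeholders, and the Betamix feature-selection step is omitted.

```python
# Hedged sketch of aligning adult and child expression features in a shared latent
# space; the MMD penalty is a generic stand-in, not necessarily the paper's criterion.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(136, 128), nn.ReLU(), nn.Linear(128, 64))  # e.g. 68 (x, y) landmarks
classifier = nn.Linear(64, 7)                                                 # 7 expression classes
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

def mmd(a, b, sigma=1.0):
    """Squared RBF-kernel maximum mean discrepancy between two feature batches."""
    def k(x, y):
        d = (x[:, None] - y[None, :]).pow(2).sum(-1)
        return torch.exp(-d / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

def train_step(adult_x, adult_y, child_x, child_y, lam=1.0):
    za, zc = encoder(adult_x), encoder(child_x)
    cls = F.cross_entropy(classifier(za), adult_y) + F.cross_entropy(classifier(zc), child_y)
    loss = cls + lam * mmd(za, zc)              # align the two domains in latent space
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random stand-ins for landmark features and labels.
adult_x, adult_y = torch.randn(32, 136), torch.randint(0, 7, (32,))
child_x, child_y = torch.randn(32, 136), torch.randint(0, 7, (32,))
train_step(adult_x, adult_y, child_x, child_y)
```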
Riemannian optimization is a principled framework for solving optimization problems in which the desired optimum is constrained to a smooth manifold $\mathcal{M}$. Algorithms designed in this framework usually require a geometric description of the manifold, which typically includes tangent spaces, retractions, and gradients of the cost function. However, in many instances, only a subset of these elements can be accessed (or none at all), due to lack of information or intractability. In this paper, we propose a novel approach that can perform approximate Riemannian optimization in such cases, where the constraining manifold is a submanifold of $\mathbb{R}^{d}$. At the bare minimum, our approach requires only a noiseless sample set $(\mathbf{x}_{i}, y_{i}) \in \mathcal{M} \times \mathbb{R}$ of the cost function and the intrinsic dimension of the manifold $\mathcal{M}$. Using the samples, and leveraging the Manifold-MLS framework (Sober and Levin 2020), we construct approximations of the missing components that enjoy provable guarantees, and analyze their computational cost. In cases where some of the components are given analytically (e.g., if the cost function and its gradient are given explicitly, or if the tangent spaces can be computed), the algorithm can easily be adapted to use the exact expressions instead of the approximations. We analyze the global convergence of Riemannian gradient-based methods using our approach, and empirically demonstrate the strength of this method, together with a conjugate-gradients type method based upon similar principles.
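To make the sample-based setting concrete, the sketch below performs approximate Riemannian gradient descent using simple stand-ins for the Manifold-MLS constructions: local PCA for the tangent space, local linear regression of the sampled costs for the gradient, and a projection onto a local tangent-plane fit as a crude retraction. These estimators are illustrative choices, not the paper's.

```python
# Hedged sketch of sample-based approximate Riemannian gradient descent; the
# tangent-space, gradient, and retraction estimators are illustrative stand-ins.
import numpy as np

def knn(points, x, k):
    return np.argsort(((points - x) ** 2).sum(1))[:k]

def approx_step(samples_x, samples_y, x, dim, k=20, lr=0.1):
    idx = knn(samples_x, x, k)
    P, Y = samples_x[idx], samples_y[idx]

    # 1) tangent space at x: top-`dim` principal directions of nearby samples
    _, _, Vt = np.linalg.svd(P - P.mean(0), full_matrices=False)
    T = Vt[:dim]

    # 2) Euclidean gradient estimate: local linear fit y ~ <g, x> + b
    A = np.hstack([P, np.ones((k, 1))])
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    g = coef[:-1]

    # 3) Riemannian gradient: project onto the estimated tangent space, then step
    x_new = x - lr * (T.T @ (T @ g))

    # 4) crude retraction stand-in: project back onto a local tangent-plane fit
    Q = samples_x[knn(samples_x, x_new, k)]
    mu = Q.mean(0)
    _, _, Vt2 = np.linalg.svd(Q - mu, full_matrices=False)
    T2 = Vt2[:dim]
    return mu + T2.T @ (T2 @ (x_new - mu))

# Toy usage: minimize f(x) = x_3 (the last coordinate) over samples of the unit sphere.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = X[:, 2]
x = X[0]
for _ in range(50):
    x = approx_step(X, Y, x, dim=2)
```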